A Extension to k-Means and (k, p)-Clustering

Neural Information Processing Systems

The lower bound on opt(U) given in Lemma B.10 holds for ρ-metric spaces with no modifications. By making the appropriate modifications to the proof of Theorem C.1, we can extend this theorem: in particular, we can obtain a proof of Theorem A.5 by taking the proof of Theorem C.1 and adding extra ρ factors whenever the triangle inequality is applied. We first prove Lemma B.1, which bounds the sizes of the sets U, and by Lemma B.2 we get the desired bound. Henceforth, we fix some positive ξ and a sufficiently large α such that Lemma B.3 holds; the claim then follows by applying Lemma B.4. The following lemma is proven in [25]. The remaining inequalities follow from Lemma B.1, Lemma B.7, Lemma B.8, and an averaging argument together with the choice of parameters. Proof of Lemma 3.3: it follows that the claim holds with high probability. Hence, by Theorem D.1, any data structure with O(poly(k)) query time must have Ω(k) amortized update time.


Fully Dynamic k-Clustering in Õ(k) Update Time

Neural Information Processing Systems

Clustering is a fundamental problem in unsupervised learning with several practical applications. In clustering, one is interested in partitioning elements into different groups (i.e., clusters) based on their similarity.
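As a concrete illustration of the clustering setting described above, here is a minimal sketch of Lloyd's algorithm for k-means, a standard static baseline (this is not the dynamic algorithm from the paper; all names and parameters are illustrative):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal sketch of Lloyd's algorithm for k-means clustering:
    alternately assign each point to its nearest center, then move
    each center to the mean of its assigned points."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize centers as k distinct input points chosen at random.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its cluster (skip empty clusters).
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

On two well-separated groups of points, the returned labels separate the groups after a few iterations.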



Simulating classification models to evaluate Predict-Then-Optimize methods

Smet, Pieter

arXiv.org Artificial Intelligence

Uncertainty in optimization is often represented as stochastic parameters in the optimization model. In Predict-Then-Optimize approaches, predictions of a machine learning model are used as values for such parameters, effectively transforming the stochastic optimization problem into a deterministic one. This two-stage framework is built on the assumption that more accurate predictions result in solutions that are closer to the actual optimal solution. However, providing evidence for this assumption in the context of complex, constrained optimization problems is challenging and often overlooked in the literature. Simulating predictions of machine learning models offers a way to (experimentally) analyze how prediction error impacts solution quality without the need to train real models. Complementing an algorithm from the literature for simulating binary classification, we introduce a new algorithm for simulating predictions of multiclass classifiers. We conduct a computational study to evaluate the performance of these algorithms, and show that classifier performance can be simulated with reasonable accuracy, although some variability is observed. Additionally, we apply these algorithms to assess the performance of a Predict-Then-Optimize algorithm for a machine scheduling problem. The experiments demonstrate that the relationship between prediction error and how close solutions are to the actual optimum is non-trivial, highlighting important considerations for the design and evaluation of decision-making systems based on machine learning predictions.
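To make the simulation idea concrete, here is a deliberately naive sketch that emulates a binary classifier of a given target accuracy by flipping ground-truth 0/1 labels independently at random. The function name and the flip-based scheme are illustrative assumptions, not the paper's algorithms, which control the error structure more carefully:

```python
import random

def simulate_binary_predictions(true_labels, target_accuracy, seed=0):
    """Naive sketch: emulate a classifier of a given accuracy by keeping
    each true label with probability target_accuracy and flipping it
    otherwise. Assumes labels are encoded as 0/1."""
    rng = random.Random(seed)
    return [y if rng.random() < target_accuracy else 1 - y
            for y in true_labels]
```

Feeding such simulated predictions into the downstream optimization model lets one study how prediction error degrades solution quality without training a real classifier.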


BBoE: Leveraging Bundle of Edges for Kinodynamic Bidirectional Motion Planning

Raghu, Srikrishna Bangalore, Roncone, Alessandro

arXiv.org Artificial Intelligence

In this work, we introduce BBoE, a bidirectional, kinodynamic, sampling-based motion planner that consistently and quickly finds low-cost solutions in environments with varying obstacle clutter. The algorithm combines exploration and exploitation while relying on precomputed robot state traversals, resulting in efficient convergence towards the goal. Our key contributions include: i) a strategy to navigate through obstacle-rich spaces by sorting and sequencing preprocessed forward propagations; and ii) BBoE, a robust bidirectional kinodynamic planner that utilizes this strategy to produce fast and feasible solutions. The proposed framework reduces planning time, diminishes solution cost and increases success rate in comparison to previous approaches. Motion planning in robotics involves identifying a series of valid configurations that a robot can assume to transition from an initial state to a desired goal state. Sampling-based planning is a popular graph-based approach used to generate robot motions by sampling discrete states and establishing connections between them via edges [23]. The popularity of these techniques is due to the inherent property of probabilistic completeness, which guarantees that a solution will be found, if one exists, as the number of sampled states approaches infinity [17], [10]. Traditionally, these techniques employ a unidirectional tree that grows from the start state and expands towards the goal region [17], [10], [6].
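For readers unfamiliar with sampling-based planning, the following is a minimal geometric RRT sketch in 2D. It is unidirectional and non-kinodynamic, so it only illustrates the basic tree growth that BBoE builds on; the workspace bounds, goal bias, and step size are illustrative assumptions:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Minimal 2D RRT sketch: grow a tree from start by repeatedly
    sampling a state (goal-biased 5% of the time), extending the
    nearest tree node one step toward it, and stopping once a node
    lands within goal_tol of the goal. Samples are drawn from [0,10]^2."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        q = goal if rng.random() < 0.05 else (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], q))
        n = nodes[i]
        d = math.dist(n, q)
        if d == 0:
            continue
        # Step toward the sample (or reach it if it is within one step).
        if d <= step:
            new = q
        else:
            new = (n[0] + step * (q[0] - n[0]) / d,
                   n[1] + step * (q[1] - n[1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Reconstruct the path by walking parent pointers back to the start.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

In an obstacle-free workspace (`is_free` always true), this reliably returns a start-to-goal path; bidirectional variants like BBoE grow a second tree from the goal to speed this up.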


Accelerating Vehicle Routing via AI-Initialized Genetic Algorithms

Greenberg, Ido, Sielski, Piotr, Linsenmaier, Hugo, Gandham, Rajesh, Mannor, Shie, Fender, Alex, Chechik, Gal, Meirom, Eli

arXiv.org Artificial Intelligence

Vehicle Routing Problems (VRP) are an extension of the Traveling Salesperson Problem and are a fundamental NP-hard challenge in combinatorial optimization. Solving VRP in real time at large scale has become critical in numerous applications, from growing markets like last-mile delivery to emerging use cases like interactive logistics planning. In many applications, one has to repeatedly solve VRP instances drawn from the same distribution, yet current state-of-the-art solvers treat each instance on its own without leveraging previous examples. We introduce an optimization framework where a reinforcement learning agent is trained on prior instances and quickly generates initial solutions, which are then further optimized by a genetic algorithm. This framework, Evolutionary Algorithm with Reinforcement Learning Initialization (EARLI), consistently outperforms current state-of-the-art solvers across various time budgets. For example, EARLI handles vehicle routing with 500 locations within one second, 10x faster than current solvers for the same solution quality, enabling real-time and interactive routing at scale. EARLI can generalize to new data, as we demonstrate on real e-commerce delivery data of a previously unseen city. By combining reinforcement learning and genetic algorithms, our hybrid framework takes a step toward closer interdisciplinary collaboration between the AI and optimization communities and toward real-time optimization in diverse domains.
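The warm-start idea can be sketched with a toy genetic algorithm for the TSP whose population is seeded with externally supplied tours, standing in for EARLI's RL-generated initial solutions. The operators here (truncation selection, segment-reversal mutation, no crossover) are deliberately simplistic and purely illustrative:

```python
import math
import random

def tour_len(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def ga_tsp(pts, init_tours, generations=200, pop_size=30, seed=0):
    """Toy GA for TSP seeded with initial tours (the warm start).
    Each generation keeps the best half of the population and adds
    one mutated child (a random segment reversal) per survivor."""
    rng = random.Random(seed)
    n = len(pts)
    pop = [list(t) for t in init_tours]
    while len(pop) < pop_size:           # pad with random tours
        t = list(range(n))
        rng.shuffle(t)
        pop.append(t)
    for _ in range(generations):
        pop.sort(key=lambda t: tour_len(t, pts))
        pop = pop[:pop_size // 2]        # truncation selection (elitist)
        children = []
        for t in pop:
            c = t[:]
            i, j = sorted(rng.sample(range(n), 2))
            c[i:j + 1] = reversed(c[i:j + 1])  # 2-opt-style mutation
            children.append(c)
        pop += children
    return min(pop, key=lambda t: tour_len(t, pts))
```

Because the survivors are carried over unchanged, the best tour never gets worse than the seeded solution, which is exactly the property that makes high-quality learned initializations pay off.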


Estimated Informed Anytime Search for Sampling-Based Planning via Adaptive Sampler

Zhang, Liding, Cai, Kuanqi, Zhang, Yu, Bing, Zhenshan, Wang, Chaoqun, Wu, Fan, Haddadin, Sami, Knoll, Alois

arXiv.org Artificial Intelligence

Path planning in robotics often involves solving continuously valued, high-dimensional problems. Popular informed approaches include graph-based searches, such as A*, and sampling-based methods, such as Informed RRT*, which utilize informed sets and anytime strategies to expedite path optimization incrementally. Informed sampling-based planners define informed sets as subsets of the problem domain based on the current best solution cost. However, when no solution is found, these planners re-sample and explore the entire configuration space, which is time-consuming and computationally expensive. This article introduces Multi-Informed Trees (MIT*), a novel planner that constructs estimated informed sets based on prior admissible solution costs before finding the initial solution, thereby accelerating the initial convergence rate. Moreover, MIT* employs an adaptive sampler that dynamically adjusts the sampling strategy based on the exploration process. Furthermore, MIT* utilizes length-related adaptive sparse collision checks to guide lazy reverse search. These features improve path cost and reduce computation time while ensuring high success rates in confined scenarios. Through a series of simulations and real-world experiments, it is confirmed that MIT* outperforms existing single-query, sampling-based planners for problems in R^4 to R^16 and has been successfully applied to real-world robot manipulation tasks. A video showcasing our experimental results is available at: https://youtu.be/30RsBIdexTU
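The informed-set idea that MIT* generalizes can be sketched as in Informed RRT*: once a solution of cost c_best is known, new samples only need to come from the ellipse whose foci are the start and goal and whose major axis equals c_best, since any point outside it cannot lie on a shorter path. A minimal 2D version, assuming c_best is at least the start-goal distance:

```python
import math
import random

def sample_informed(start, goal, c_best, seed=None):
    """Draw a point uniformly from the 2D informed ellipse with foci
    start/goal and major axis c_best (requires c_best >= dist(start, goal))."""
    rng = random.Random(seed)
    c_min = math.dist(start, goal)                   # theoretical minimum cost
    a = c_best / 2.0                                 # semi-major axis
    b = math.sqrt(c_best ** 2 - c_min ** 2) / 2.0    # semi-minor axis
    # Uniform sample in the unit disk via rejection sampling.
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1.0:
            break
    # Scale to the ellipse, rotate onto the start-goal axis, translate to center.
    theta = math.atan2(goal[1] - start[1], goal[0] - start[0])
    ex, ey = a * x, b * y
    cx, cy = (start[0] + goal[0]) / 2.0, (start[1] + goal[1]) / 2.0
    return (cx + ex * math.cos(theta) - ey * math.sin(theta),
            cy + ex * math.sin(theta) + ey * math.cos(theta))
```

Every returned point p satisfies dist(p, start) + dist(p, goal) <= c_best, which is the defining property of the informed set; MIT*'s contribution is estimating such sets before any solution cost is known.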


APT*: Asymptotically Optimal Motion Planning via Adaptively Prolated Elliptical R-Nearest Neighbors

Zhang, Liding, Wang, Sicheng, Cai, Kuanqi, Bing, Zhenshan, Wu, Fan, Wang, Chaoqun, Haddadin, Sami, Knoll, Alois

arXiv.org Artificial Intelligence

Optimal path planning aims to determine a sequence of states from a start to a goal while accounting for planning objectives. Popular methods often use fixed batch sizes and neglect obstacle information, making them insufficiently problem-specific. This study introduces Adaptively Prolated Trees (APT*), a novel sampling-based motion planner that builds on Force Direction Informed Trees (FDIT*), integrating adaptive batch-sizing and elliptical $r$-nearest neighbor modules to dynamically modulate the path searching process based on environmental feedback. APT* adjusts batch sizes based on the hypervolume of the informed sets and considers vertices as electric charges that obey Coulomb's law to define virtual forces via neighbor samples, thereby refining the prolate nearest neighbor selection. These modules employ non-linear prolate methods to adaptively adjust the electric charges of vertices for force definition, thereby improving the convergence rate with lower solution costs. Comparative analyses show that APT* outperforms existing single-query sampling-based planners in dimensions from $\mathbb{R}^4$ to $\mathbb{R}^{16}$, and it was further validated through a real-world robot manipulation task. A video showcasing our experimental results is available at: https://youtu.be/gCcUr8LiEw4
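The Coulomb-style virtual-force idea can be sketched as follows. This is an illustrative inverse-square force accumulator in 2D, not APT*'s actual force definition or its non-linear prolation scheme:

```python
import math

def coulomb_force(v, neighbors, charges):
    """Net virtual force on vertex v: each neighbor with charge q exerts
    a repulsive force of magnitude q / r^2 directed from the neighbor
    toward v (inverse-square law). Such a net force could then bias
    which r-nearest neighbors are favored during tree expansion."""
    fx = fy = 0.0
    for (nx, ny), q in zip(neighbors, charges):
        dx, dy = v[0] - nx, v[1] - ny
        r = math.hypot(dx, dy)
        if r == 0:
            continue                 # skip coincident samples
        mag = q / (r * r)            # inverse-square magnitude
        fx += mag * dx / r           # unit direction times magnitude
        fy += mag * dy / r
    return fx, fy
```

A single unit charge at (1, 0) pushes a vertex at the origin in the -x direction, and two symmetric charges cancel, matching the physical intuition behind the force-based neighbor selection.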